Techniques for proving Asynchronous Convergence results for Markov Chain Monte Carlo methods
Authors
Abstract
Markov Chain Monte Carlo (MCMC) methods such as Gibbs sampling are finding widespread use in applied statistics and machine learning. These often require significant computational power, and are increasingly being deployed on parallel and distributed systems such as compute clusters. Recent work has proposed running iterative algorithms such as gradient descent and MCMC in parallel asynchronously for increased performance, with good empirical results in certain problems. Unfortunately, for MCMC this parallelization technique requires new convergence theory, as it has been explicitly demonstrated to lead to divergence on some examples. Recent theory on Asynchronous Gibbs sampling describes why these algorithms can fail, and provides a way to alter them to make them converge. In this article, we describe how to apply this theory in a generic setting, to understand the asynchronous behavior of any MCMC algorithm, including those implemented using parameter servers, and those not based on Gibbs sampling.
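As a concrete illustration of the failure mode the abstract refers to, the Python sketch below (illustrative only, not the authors' construction) contrasts a standard sequential Gibbs sweep on a bivariate normal target with a naive asynchronous-style update in which each coordinate conditions on a stale copy of the other, as workers reading from an unsynchronised parameter server might. The target distribution, variable names, and staleness model are assumptions made for the example.

```python
import numpy as np

rho = 0.9                      # correlation of the bivariate normal target
sd = np.sqrt(1.0 - rho ** 2)   # conditional standard deviation

def gibbs_sequential(n_iter, rng):
    """Standard Gibbs sampler: each conditional update sees the latest value."""
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # draw x | y
        y = rng.normal(rho * x, sd)   # draw y | x, using the x just drawn
        draws[t] = x, y
    return draws

def gibbs_async_naive(n_iter, delay, rng):
    """Caricature of asynchronous Gibbs: each coordinate conditions on a copy
    of the other that is `delay` iterations stale, mimicking workers that read
    from a shared store without synchronising."""
    xs, ys = [0.0], [0.0]
    for _ in range(n_iter):
        x_stale = xs[max(0, len(xs) - 1 - delay)]
        y_stale = ys[max(0, len(ys) - 1 - delay)]
        xs.append(rng.normal(rho * y_stale, sd))  # ignores y's newest value
        ys.append(rng.normal(rho * x_stale, sd))  # ignores x's newest value
    return np.column_stack([xs[1:], ys[1:]])

rng = np.random.default_rng(0)
print("sequential corr:  ", np.corrcoef(gibbs_sequential(20000, rng).T)[0, 1])
print("asynchronous corr:", np.corrcoef(gibbs_async_naive(20000, 5, rng).T)[0, 1])
```

On this toy example the sequential sampler recovers the target correlation of about 0.9, while the stale-update version drives the sampled correlation toward zero: the asynchronous chain still settles down, but to the wrong distribution, which is precisely the kind of divergence the theory referenced in the abstract is designed to diagnose and repair.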
Similar resources
Theoretical rates of convergence for Markov chain Monte Carlo
We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally.
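One standard instance of such an a priori bound, stated here for concreteness rather than as the specific result of that paper, comes from a uniform minorization (Doeblin) condition on the transition kernel:

```latex
% If the transition kernel P satisfies the uniform minorization condition
%   P(x, A) >= \varepsilon \, \nu(A)  for all states x and measurable sets A,
% for some probability measure \nu and some \varepsilon > 0, then the chain
% converges geometrically to its stationary distribution \pi:
\[
  \| P^n(x, \cdot) - \pi \|_{\mathrm{TV}} \;\le\; (1 - \varepsilon)^n ,
  \qquad\text{so}\qquad
  n \;\ge\; \frac{\log \delta}{\log (1 - \varepsilon)}
  \quad\text{iterations guarantee total-variation distance at most } \delta .
\]
```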
Assessing Convergence of Markov Chain Monte Carlo Algorithms
We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are...
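One widely used diagnostic of the kind such reviews cover is the Gelman-Rubin potential scale reduction factor, which compares between-chain and within-chain variance across several independently initialised chains. The sketch below is an illustration of that diagnostic, not code taken from the cited review; the function name and the toy data are assumptions.

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Potential scale reduction factor for one scalar parameter.

    `chains` has shape (m, n): m independently initialised chains, n draws each.
    Values near 1 suggest the chains have mixed; values well above 1 indicate
    that the chains have not converged to a common distribution."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * chain_means.var(ddof=1)           # between-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

# Example: four chains that agree on the target vs. four stuck at different means.
rng = np.random.default_rng(1)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))
stuck = rng.normal(0.0, 1.0, size=(4, 1000)) + np.arange(4)[:, None]
print("R-hat (mixed):", gelman_rubin_rhat(mixed))   # close to 1
print("R-hat (stuck):", gelman_rubin_rhat(stuck))   # substantially above 1
```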
Comparisons of Markov Chain Monte Carlo Convergence Diagnostic Tests for Bayesian Logistic Random Effect Models
In mixed models, posterior densities are too difficult to work with directly. With Markov chain Monte Carlo (MCMC) methods, valid statistical inference requires the convergence of the MCMC chain to its stationary distribution. There is no single definitive way to assess the convergence of a Markov chain, and many techniques have been developed for doing so. Although increasingly populari...
A Stochastic algorithm to solve multiple dimensional Fredholm integral equations of the second kind
In the present work, a new stochastic algorithm is proposed to solve multiple dimensional Fredholm integral equations of the second kind. The solution of the integral equation is described by the Neumann series expansion. Each term of this expansion can be considered as an expectation which is approximated by a continuous Markov chain Monte Carlo method. An algorithm is proposed to sim...
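A minimal sketch of that general approach, truncating the Neumann series and estimating each term as an expectation over a simulated chain, is given below. It is not the cited paper's algorithm: it is one-dimensional rather than multi-dimensional, and the kernel K(x, y) = x y, the right-hand side f(x) = x, and lambda = 1 are assumptions chosen so that the exact solution u(x) = x / (1 - lambda/3) is available for comparison.

```python
import numpy as np

# Solve u(x) = f(x) + lam * \int_0^1 K(x, y) u(y) dy by truncating the
# Neumann series u = sum_n lam^n (K^n f) and estimating each iterated
# integral at x0 as an expectation over a chain y_1 -> y_2 -> ... with
# uniform transitions on [0, 1] (transition density 1, so no extra weights).

def K(x, y):
    return x * y          # kernel chosen so the exact solution is known

def f(x):
    return x

def neumann_mc(x0, lam=1.0, n_terms=20, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    estimate = f(x0)                          # n = 0 term of the series
    weights = np.full(n_paths, lam)           # lam^n times accumulated kernel values
    prev = np.full(n_paths, x0)
    for _ in range(n_terms):
        y = rng.uniform(0.0, 1.0, n_paths)    # next state of each simulated chain
        weights = weights * K(prev, y)        # multiply in K(y_{n-1}, y_n)
        estimate += np.mean(weights * f(y))   # estimate of lam^n (K^n f)(x0)
        weights = weights * lam
        prev = y
    return estimate

print("Monte Carlo estimate:", neumann_mc(0.5))
print("exact solution      :", 0.5 / (1.0 - 1.0 / 3.0))   # u(0.5) = 0.75
```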
Journal: CoRR
Volume: abs/1711.06719
Pages: -
Publication date: 2017